25 research outputs found

    Leveraging very-high spatial resolution hyperspectral and thermal UAV imageries for characterizing diurnal indicators of grapevine physiology

    Efficient and accurate methods to monitor crop physiological responses help growers better understand crop physiology and improve crop productivity. In recent years, developments in unmanned aerial vehicles (UAV) and sensor technology have enabled image acquisition at very-high spectral, spatial, and temporal resolutions. However, potential applications and limitations of very-high-resolution (VHR) hyperspectral and thermal UAV imaging for characterization of plant diurnal physiology remain largely unknown, due to issues related to shadow and canopy heterogeneity. In this study, we propose a canopy zone-weighting (CZW) method to leverage the potential of VHR (≤9 cm) hyperspectral and thermal UAV imageries in estimating physiological indicators, such as stomatal conductance (Gs) and steady-state fluorescence (Fs). Diurnal flights and concurrent in-situ measurements were conducted during the grapevine growing seasons of 2017 and 2018 in a vineyard in Missouri, USA. We used a neural network classifier and the Canny edge detection method to extract pure vine canopy from the hyperspectral and thermal images, respectively. Then, the vine canopy was segmented into three canopy zones (sunlit, nadir, and shaded) using K-means clustering based on the canopy shadow fraction and canopy temperature. Common reflectance-based spectral indices, sun-induced chlorophyll fluorescence (SIF), and a simplified canopy water stress index (siCWSI) were computed as image retrievals. Using the coefficient of determination (R2) established between the image retrievals from the three canopy zones and the in-situ measurements as a weight factor, weighted image retrievals were calculated and their correlation with in-situ measurements was explored. The results showed that the most frequent and highest correlations with Gs and Fs were obtained for the CZW-based photochemical reflectance index (PRI), SIF, and siCWSI (PRICZW, SIFCZW, and siCWSICZW). When all flights were combined for a given field campaign date, PRICZW, SIFCZW, and siCWSICZW significantly improved the relationship with Gs and Fs. The proposed approach takes full advantage of VHR hyperspectral and thermal UAV imageries and suggests that the CZW method is simple yet effective for estimating Gs and Fs.
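
    A minimal sketch of the canopy zone-weighting idea described above, assuming array inputs, scikit-learn K-means, and a simple R2-normalized weighting; the authors' exact implementation is not reproduced here, and all names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def czw_retrieval(zone_features, pixel_retrieval, zone_r2):
    """Canopy zone-weighted retrieval for one vine/plot (illustrative sketch).

    zone_features   : (n_pixels, 2) array of [shadow fraction, canopy temperature]
    pixel_retrieval : (n_pixels,) per-pixel index values (e.g., PRI or SIF)
    zone_r2         : (3,) R^2 between each zone's mean retrieval and in-situ data
                      for shaded, nadir, and sunlit zones (assumed known beforehand)
    """
    # Segment pure-canopy pixels into three zones (sunlit, nadir, shaded).
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(zone_features)

    # Order zones by mean canopy temperature so indices map consistently
    # to shaded < nadir < sunlit.
    order = np.argsort([zone_features[labels == k, 1].mean() for k in range(3)])
    zone_means = np.array([pixel_retrieval[labels == k].mean() for k in order])

    # R^2-weighted combination of the three zone-mean retrievals.
    weights = np.asarray(zone_r2) / np.sum(zone_r2)
    return float(np.dot(weights, zone_means))
```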

    Utility of Daily 3 m Planet Fusion Surface Reflectance Data for Tillage Practice Mapping with Deep Learning

    Tillage practices alter soil surface structure in ways that can potentially be captured by satellite images with both high spatial and temporal resolution. This study explored tillage practice mapping using the daily Planet Fusion surface reflectance (PF-SR) gap-free 3 m data generated by fusing PlanetScope with Landsat-8, Sentinel-2, and MODIS surface reflectance data. The study area is a 220 × 220 km² agricultural area in South Dakota, USA, and the study used 3285 PF-SR images from September 1, 2020 to August 31, 2021. The PF-SR images for the 433 surveyed fields were sliced into 10,747 non-overlapping time-series patches, split into training (70%) and evaluation (30%) sets. The training and evaluation patches were drawn from different fields to keep the evaluation data independent. The performance of four deep learning models (i.e., 2D convolutional neural networks (CNN), 3D CNN, CNN-Long short-term memory (LSTM), and attention CNN-LSTM) in tillage practice mapping, as well as their sensitivity to different spatial (i.e., 3 m, 24 m, and 96 m) and temporal resolutions (16-day, 8-day, 4-day, 2-day and 1-day), was examined. Classification accuracy continuously increased with increases in both temporal and spatial resolution. The optimal models (3D CNN and attention CNN-LSTM) achieved ~77% accuracy using 2-day or daily 3 m resolution data, as opposed to ~72% accuracy using 16-day 3 m resolution data or daily 24 m resolution data. This study also analyzed the feature importance of different acquisition dates for the two optimal models. The 3D CNN model feature importances were found to agree well with the timing of tillage practices. High feature importance was associated with observations during the fall and spring tillage periods (i.e., fresh tillage signals), whereas the crop peak growing period (i.e., tillage signals weathered and confounded by dense canopy) was characterized by relatively low feature importance. The work provides valuable insights into the utility of deep learning for tillage mapping and change event time identification based on high resolution imagery.
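
    One of the evaluated architectures, the 3D CNN, convolves jointly over the temporal and spatial dimensions of each patch. A minimal sketch is given below; the number of dates, patch size, band count, class count, and layer widths are illustrative assumptions, not the study's configuration.

```python
from tensorflow.keras import layers, models

def build_3d_cnn(n_dates=23, patch=32, n_bands=4, n_classes=4):
    """Toy 3D CNN for tillage classification from time-series image patches."""
    inputs = layers.Input(shape=(n_dates, patch, patch, n_bands))
    x = layers.Conv3D(16, (3, 3, 3), padding="same", activation="relu")(inputs)
    x = layers.MaxPooling3D((2, 2, 2))(x)                  # pool over time and space
    x = layers.Conv3D(32, (3, 3, 3), padding="same", activation="relu")(x)
    x = layers.GlobalAveragePooling3D()(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```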

    Data-Driven Artificial Intelligence for Calibration of Hyperspectral Big Data

    Near-Earth hyperspectral big data present both huge opportunities and challenges for spurring developments in agriculture and high-throughput plant phenotyping and breeding. In this article, we present data-driven approaches to address the calibration challenges of utilizing near-Earth hyperspectral data for agriculture. A data-driven, fully automated calibration workflow was developed that includes a suite of robust algorithms for radiometric calibration, bidirectional reflectance distribution function (BRDF) correction and reflectance normalization, soil and shadow masking, and image quality assessment. An empirical method that utilizes predetermined models between camera photon counts (digital numbers) and downwelling irradiance measurements for each spectral band was established to perform radiometric calibration. A kernel-driven semiempirical BRDF correction method based on the Ross Thick-Li Sparse (RTLS) model was used to normalize the data for both changes in solar elevation and sensor view angle differences attributed to pixel location within the field of view. Following the rigorous radiometric and BRDF corrections, novel rule-based methods were developed for automatic soil removal, a newly proposed approach was used for image quality assessment, and shadow masking and plot-level feature extraction were carried out. Our results show that the automated calibration, processing, storage, and analysis pipeline developed in this work can effectively handle massive amounts of hyperspectral data and address the urgent challenges related to the production of sustainable bioenergy and food crops, targeting methods to accelerate plant breeding for improving yield and biomass traits.
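
    A rough sketch of the per-band empirical radiometric calibration step described above, assuming a linear DN-to-radiance model and division by concurrent downwelling irradiance; the coefficient names and the linear form are illustrative assumptions, not the fitted models from the article.

```python
import numpy as np

def dn_to_reflectance(dn_cube, gains, offsets, irradiance):
    """Illustrative per-band empirical calibration of a hyperspectral cube.

    dn_cube        : (rows, cols, bands) raw digital numbers
    gains, offsets : (bands,) predetermined per-band calibration coefficients
    irradiance     : (bands,) downwelling spectral irradiance at acquisition time
    """
    radiance = dn_cube * gains + offsets            # assumed linear DN -> radiance model
    reflectance = np.pi * radiance / irradiance     # normalize by incoming irradiance
    return np.clip(reflectance, 0.0, 1.0)           # keep values in a physical range
```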

    Urban tree species classification using UAV-based multi-sensor data fusion and machine learning

    Urban tree species classification is a challenging task due to the spectral and spatial diversity within an urban environment. Unmanned aerial vehicle (UAV) platforms and small-sensor technology are rapidly evolving, presenting the opportunity for a comprehensive multi-sensor remote sensing approach to urban tree classification. The objective of this paper was to develop a multi-sensor data fusion technique for urban tree species classification with limited training samples. To that end, UAV-based multispectral, hyperspectral, LiDAR, and thermal infrared imagery was collected over an urban study area to test the classification of 96 individual trees from seven species using a data fusion approach. Two supervised machine learning classifiers, Random Forest (RF) and Support Vector Machine (SVM), were investigated for their capacity to incorporate highly dimensional and diverse datasets from multiple sensors. When using hyperspectral-derived spectral features with RF, the fusion of all features extracted from all sensor types (spectral, LiDAR, thermal) achieved the highest overall classification accuracy (OA) of 83.3% and a kappa of 0.80. Although multispectral reflectance bands alone produced a significantly lower OA of 55.2% compared to 70.2% with minimum noise fraction (MNF) transformed hyperspectral reflectance bands, the full dataset combination (spectral, LiDAR, thermal) with multispectral-derived spectral features still achieved an OA of 81.3% and a kappa of 0.77 using RF. Comparison of the features extracted from individual sensors for each species highlights the ability of each sensor to identify distinguishing characteristics between species to aid classification. The results demonstrate the potential of a high-resolution multi-sensor data fusion approach for classifying individual trees by species in a complex urban environment under limited sampling requirements.
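
    For illustration only, the snippet below sketches how per-tree features from several sensors might be stacked and fed to a Random Forest classifier; the random placeholder arrays stand in for the study's actual spectral, LiDAR, and thermal features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trees = 96                                   # individual tree crowns
spectral = rng.normal(size=(n_trees, 20))      # placeholder hyperspectral/MNF features
lidar = rng.normal(size=(n_trees, 5))          # placeholder crown-structure metrics
thermal = rng.normal(size=(n_trees, 2))        # placeholder canopy-temperature metrics
labels = rng.integers(0, 7, size=n_trees)      # seven species

fused = np.hstack([spectral, lidar, thermal])  # sensor-level feature fusion
rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("CV accuracy:", cross_val_score(rf, fused, labels, cv=5).mean())
```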

    A Geographically Weighted Random Forest Approach to Predict Corn Yield in the US Corn Belt

    Crop yield prediction before harvest is crucial for food security, grain trade, and policy making. Previously, several machine learning methods have been applied to predict crop yield using different types of variables. In this study, we propose using the Geographically Weighted Random Forest Regression (GWRFR) approach to improve crop yield prediction at the county level in the US Corn Belt. We trained the GWRFR and five other popular machine learning algorithms (Multiple Linear Regression (MLR), Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), Decision Tree Regression (DTR), and Random Forest Regression (RFR)) with the following sets of features: (1) the full set of features; (2) vegetation indices; (3) gross primary production (GPP); (4) climate data; and (5) soil data. We compared the results of the GWRFR with those of the other five models. The results show that the GWRFR with the full set of features (R2 = 0.90 and RMSE = 0.764 MT/ha) outperforms the other machine learning algorithms. For individual categories of features such as GPP, vegetation indices, climate, and soil features, the GWRFR also outperforms the other models. The Moran’s I value of the residuals generated by GWRFR is smaller than that of the other models, which shows that GWRFR can better address the spatial non-stationarity issue. The proposed method can also potentially be used to improve yield prediction for other types of crops in other regions.
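
    One common way to realize a geographically weighted random forest, sketched below under assumed variable names and a Gaussian kernel, is to fit a local forest per target location with training samples weighted by their distance to that location; the study's exact formulation and bandwidth are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def gwrf_predict(coords, X, y, target_coord, target_x, bandwidth=100.0):
    """Predict yield at one location with a locally weighted random forest.

    coords : (n, 2) county centroid coordinates (e.g., km)
    X, y   : (n, p) features and (n,) observed yields
    """
    d = np.linalg.norm(coords - target_coord, axis=1)
    w = np.exp(-(d / bandwidth) ** 2)              # Gaussian spatial kernel weights
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X, y, sample_weight=w)                  # local, distance-weighted forest
    return rf.predict(np.asarray(target_x).reshape(1, -1))[0]
```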

    Forest Conservation with Deep Learning: A Deeper Understanding of Human Geography around the Betampona Nature Reserve, Madagascar

    Documenting the impacts of climate change and human activities on tropical rainforests is imperative for protecting tropical biodiversity and for better implementation of REDD+ and the UN Sustainable Development Goals. Recent advances in very high-resolution satellite sensor systems (i.e., WorldView-3), computing power, and machine learning (ML) have enabled improved mapping of fine-scale changes in the tropics. However, approaches to date have focused on feature extraction or extensive tuning of ML parameters and have made little use of textural information, which has been found to be powerful in many applications, limiting the potential of ML for forest conservation mapping. Additionally, the contribution of shortwave infrared (SWIR) bands to forest cover mapping is unknown. The objectives were to develop end-to-end mapping of the tropical forest using fully convolutional neural networks (FCNNs) with WorldView-3 (WV-3) imagery and to evaluate human impact on the environment using the Betampona Nature Reserve (BNR) in Madagascar as the test site. An FCNN (U-Net) using spatial/textural information was implemented and compared with feature-fed pixel-based methods including Support Vector Machine (SVM), Random Forest (RF), and Deep Neural Network (DNN). Results show that the FCNN model outperformed the other models with an accuracy of 90.9%, while SVM, RF, and DNN provided accuracies of 88.6%, 84.8%, and 86.6%, respectively. When SWIR bands were excluded from the input data, the FCNN retained superior performance over the other methods with a 1.87% decrease in accuracy, while the accuracies of the other models (SVM, RF, and DNN) decreased by 5.42%, 3.18%, and 8.55%, respectively. Spatiotemporal analysis showed a 0.7% increase in Evergreen Forest within the BNR and a 32% increase in tree cover within residential areas, likely due to forest regeneration and conservation efforts. Other effects of conservation efforts are also discussed.
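
    A compact U-Net-style FCNN sketch for pixel-wise classification of WorldView-3 patches is given below; depth, filter counts, patch size, band count, and class count are assumptions for illustration rather than the architecture used in the study.

```python
from tensorflow.keras import layers, models

def build_unet(patch=128, n_bands=16, n_classes=6):
    """Toy two-level U-Net for land-cover segmentation of image patches."""
    inputs = layers.Input(shape=(patch, patch, n_bands))
    # Encoder
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(c1)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(c2)
    # Decoder with a skip connection carrying spatial/textural detail
    u1 = layers.UpSampling2D()(c2)
    u1 = layers.Concatenate()([u1, c1])
    c3 = layers.Conv2D(32, 3, padding="same", activation="relu")(u1)
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(c3)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```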

    Urban Tree Species Classification Using a WorldView-2/3 and LiDAR Data Fusion Approach and Deep Learning

    Urban areas feature complex and heterogeneous land covers that create challenging issues for tree species classification. The increased availability of high spatial resolution multispectral satellite imagery and LiDAR datasets, combined with the recent evolution of deep learning within remote sensing for object detection and scene classification, provides promising opportunities to map individual tree species with greater accuracy and resolution. However, there are knowledge gaps related to the contribution of WorldView-3 SWIR bands, the very high resolution panchromatic (PAN) band, and LiDAR data to detailed tree species mapping. Additionally, contemporary deep learning methods are hampered by a lack of training samples and the difficulty of preparing training data. The objective of this study was to examine the potential of a novel deep learning method, the Dense Convolutional Network (DenseNet), to identify dominant individual tree species in a complex urban environment within a fused image of WorldView-2 VNIR, WorldView-3 SWIR, and LiDAR datasets. DenseNet results were compared against two popular machine learning classifiers in remote sensing image analysis, Random Forest (RF) and Support Vector Machine (SVM). Our results demonstrated that: (1) utilizing a data fusion approach beginning with VNIR and adding SWIR, LiDAR, and PAN bands increased the overall accuracy of the DenseNet classifier from 75.9% to 76.8%, 81.1%, and 82.6%, respectively; (2) DenseNet significantly outperformed RF and SVM for the classification of eight dominant tree species with an overall accuracy of 82.6%, compared to 51.8% and 52% for the SVM and RF classifiers, respectively; and (3) DenseNet maintained superior performance over the RF and SVM classifiers under restricted training sample quantities, a major limiting factor for deep learning techniques. Overall, the study reveals that DenseNet is more effective for urban tree species classification, as it outperforms the popular RF and SVM techniques when working with highly complex image scenes regardless of training sample size.
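
    The defining feature of DenseNet is dense connectivity: each convolutional layer receives the concatenation of all preceding feature maps. The toy block below illustrates the idea; input size, growth rate, depth, and band count are illustrative assumptions, not the network used in the study.

```python
from tensorflow.keras import layers, models

def dense_block(x, n_layers=4, growth_rate=12):
    """Append growth_rate new feature maps per layer via concatenation."""
    for _ in range(n_layers):
        y = layers.BatchNormalization()(x)
        y = layers.Activation("relu")(y)
        y = layers.Conv2D(growth_rate, 3, padding="same")(y)
        x = layers.Concatenate()([x, y])        # dense (all-to-all) connectivity
    return x

inputs = layers.Input(shape=(32, 32, 12))       # e.g., a fused VNIR+SWIR+LiDAR+PAN stack
x = layers.Conv2D(24, 3, padding="same")(inputs)
x = dense_block(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(8, activation="softmax")(x)   # eight dominant species
model = models.Model(inputs, outputs)
```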

    Estimating Crop Seed Composition Using Machine Learning from Multisensory UAV Data

    The pre-harvest estimation of seed composition from standing crops is imperative for field management practices and plant phenotyping. This paper presents, for the first time, the potential of Unmanned Aerial Vehicle (UAV)-based high-resolution hyperspectral and LiDAR data acquired from in-season standing crops for estimating the seed protein and oil composition of soybean and corn using multisensory data fusion and automated machine learning. UAV-based hyperspectral and LiDAR data were collected during the growing season (reproductive stage five (R5)) of 2020 over a soybean test site near Columbia, Missouri, and a cornfield near Urbana, Illinois, USA. Canopy spectral and texture features were extracted from the hyperspectral imagery, and canopy structure features were derived from the LiDAR point clouds. The extracted features were then used as input variables for automated machine-learning methods available with the H2O Automated Machine-Learning framework (H2O-AutoML). The results showed that: (1) UAV hyperspectral imagery can successfully predict both the protein and oil content of soybean and corn with moderate accuracy; (2) canopy structure features derived from LiDAR point clouds yielded slightly poorer estimates of crop-seed composition than the hyperspectral data; (3) regardless of machine-learning method, the combination of hyperspectral and LiDAR data outperformed predictions using a single sensor alone, with an R2 of 0.79 and 0.67 for corn protein and oil and an R2 of 0.64 and 0.56 for soybean protein and oil; and (4) the H2O-AutoML framework was found to be an efficient strategy for machine-learning-based, data-driven model building. Among the specific regression methods evaluated in this study, the Gradient Boosting Machine (GBM) and Deep Neural Network (NN) exhibited superior performance to the other methods. This study reveals opportunities and limitations of multisensory UAV data fusion and automated machine learning for estimating crop-seed composition.
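
    A minimal H2O-AutoML sketch for regressing a seed-composition trait on fused canopy features is shown below; the file name and column names are hypothetical placeholders, and the model and runtime limits are arbitrary.

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()
frame = h2o.import_file("plot_features.csv")        # hypothetical fused feature table
train, test = frame.split_frame(ratios=[0.8], seed=42)

predictors = [c for c in frame.columns if c != "protein"]
aml = H2OAutoML(max_models=20, max_runtime_secs=600, seed=42)
aml.train(x=predictors, y="protein", training_frame=train)

print(aml.leaderboard.head())                        # ranked candidate models
print("Test R2:", aml.leader.model_performance(test).r2())
```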

    Remote Sensing Based Spatial Statistics to Document Tropical Rainforest Transition Pathways

    In this paper, grid cell based spatial statistics were used to quantify the drivers of land-cover and land-use change (LCLUC) and habitat degradation in a tropical rainforest in Madagascar. First, a spectral database of various land-cover and land-use classes was compiled using multi-year field campaign data and photointerpretation of satellite images. Next, residential areas were extracted from IKONOS-2 and GeoEye-1 images using object-oriented feature extraction (OBIA). Then, Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) data were used to generate land-cover and land-use maps from 1990 to 2011, and LCLUC maps were developed at decadal intervals and converted to 100 m vector grid cells. Finally, the causal associations of LCLUC were quantified using ordinary least squares regression analysis and Moran’s I, and a forest disturbance index derived from the time-series Landsat data was used to further confirm the LCLUC drivers. The results showed that (1) local spatial statistical approaches were most effective at quantifying the drivers of LCLUC, and (2) the combined threats of habitat degradation in and around the reserve and increasing encroachment of invasive plant species led to the expansion of shrubland and mixed forest within the former primary forest, which was echoed by the forest disturbance index derived from the Landsat data.
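
    For reference, a minimal implementation of global Moran's I, the spatial autocorrelation statistic used above, is sketched below; the construction of the spatial weight matrix is left to the caller and the variable names are illustrative.

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for one attribute over n grid cells.

    values  : (n,) attribute per grid cell (e.g., forest-loss fraction)
    weights : (n, n) spatial weight matrix (e.g., row-standardized contiguity)
    """
    values = np.asarray(values, dtype=float)
    z = values - values.mean()
    num = (weights * np.outer(z, z)).sum()     # spatially weighted cross-products
    den = (z ** 2).sum()
    return (len(values) / weights.sum()) * (num / den)
```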